
    Pharmacoeconomic evaluations and primary care prescribing

    This study aimed to investigate the effect of incorporating adverse drug reactions (ADRs) in economic analyses of drug therapies, and subsequently to explore the impact of this information on prescribing in primary care. Three main studies were conducted to achieve these aims. In the first study, an economic analysis was conducted to estimate the comparative costs, in a large UK population (N = 98 887), of nonsteroidal anti-inflammatory drug (NSAID) therapy alone and in combination with gastrointestinal (GI) protective agents, including concomitant prescriptions of H2 blockers, omeprazole and misoprostol. The second study was a pharmacoeconomic analysis, using data from the literature and local expert opinion, of three commonly prescribed classes of drugs in primary care – NSAIDs, selective serotonin reuptake inhibitors (SSRIs) and angiotensin converting enzyme (ACE) inhibitors for the treatment of rheumatoid arthritis, depression and hypertension respectively. Finally, the results from the pharmacoeconomic analysis were disseminated to GPs in a local Health Board to explore the impact on primary care prescribing. Economic analyses based on various data sources have shown that the total cost of drug therapies is often much higher than the purchasing cost alone. There is much value in taking the clinical and economic impact of ADRs into account when conducting pharmacoeconomic evaluations. However, this is often restricted by the availability of some of the data required to complete the economic model. The necessary data do exist, but linked clinical data for this type of analysis are not readily available for research purposes. General practitioners were generally supportive of economic evaluations and of the exploratory study on disseminating pharmacoeconomic information. However, the dissemination exercise failed to demonstrate a positive effect on prescribing. In addition to the barriers highlighted in the literature, it was found that GPs do not feel that there is a role for the implementation of economic information in primary care
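The accounting idea underlying the first study, that the true cost of a therapy includes the expected cost of managing its adverse effects, can be sketched as follows. All prices, ADR rates, and the comparison itself are invented placeholders, not the study's UK cost data:

```python
# Illustrative sketch only: expected total therapy cost = acquisition cost
# plus the expected cost of managing adverse drug reactions (ADRs).
# The drug prices, ADR rates, and ADR management cost below are hypothetical.

def expected_total_cost(acquisition, adr_rate, adr_management_cost):
    """Expected per-patient cost over the treatment period."""
    return acquisition + adr_rate * adr_management_cost

# Hypothetical comparison: NSAID alone vs NSAID plus a GI-protective agent
nsaid_alone = expected_total_cost(acquisition=20.0, adr_rate=0.05,
                                  adr_management_cost=400.0)
nsaid_with_gi_protection = expected_total_cost(acquisition=55.0, adr_rate=0.01,
                                               adr_management_cost=400.0)
```

Under these invented numbers the protective agent raises the acquisition cost but cuts the expected ADR cost; whether that trade-off favours co-prescription is precisely the empirical question the study's economic analysis addresses with real population data.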

    Towards a transparent, credible, evidence-based decision-making process of new drug listing on the Hong Kong Hospital Authority Drug Formulary: challenges and suggestions

    The aim of this article is to describe the process, evaluation criteria, and possible outcomes of decision-making for new drugs listed in the Hong Kong Hospital Authority Drug Formulary, in comparison with the health technology assessment (HTA) policies of agencies overseas. Details of the decision-making process, including the new drug listing submission, the Drug Advisory Committee (DAC) meeting, and procedures prior to and following the meeting, were extracted from the official Hong Kong Hospital Authority drug formulary management website and manual. Publicly available information related to the new drug decision-making process for five HTA agencies [the National Institute for Health and Care Excellence (NICE), the Scottish Medicines Consortium (SMC), the Australian Pharmaceutical Benefits Advisory Committee (PBAC), the Canadian Agency for Drugs and Technologies in Health (CADTH), and the New Zealand Pharmaceutical Management Agency (PHARMAC)] was reviewed and retrieved from official documents in the public domain. The DAC is in charge of systematically and critically appraising new drugs before they are listed on the formulary, reviewing submitted applications, and making listing decisions based on scientific evidence, with safety, efficacy, and cost-effectiveness as the primary considerations. When compared with other HTA agencies, the transparency of the DAC's decision-making process, the relevance of clinical and health economic evidence, and the lack of health economic and methodological input in submissions are the major challenges for the new-drug listing policy in Hong Kong. Despite these challenges, this review provides suggestions for establishing a more transparent, credible, and evidence-based decision-making process for the Hong Kong Hospital Authority Drug Formulary. Proposals for improving the listing of new drugs in the formulary should be a priority of healthcare reforms

    How do diabetes models measure up? A review of diabetes economic models and ADA guidelines

    Introduction: Economic models and computer simulation models have been used for assessing the short-term cost-effectiveness of interventions and for modelling long-term outcomes and costs. Several guidelines and checklists have been published to improve their methods and reporting. This article presents an overview of published diabetes models, with a focus on how well the models are described in relation to the considerations set out in the American Diabetes Association (ADA) guidelines. Methods: Relevant electronic databases and National Institute for Health and Care Excellence (NICE) guidelines were searched in December 2012. Studies were included in the review if they estimated lifetime outcomes for patients with type 1 or type 2 diabetes. Only unique models, and only the original papers, were included in the review. If additional information was reported in subsequent or paired articles, those citations were also included. References and forward citations of relevant articles, including the previous systematic reviews, were searched using a method similar to pearl growing. Four principal areas were covered by the ADA guidance on reporting for models: transparency, validation, uncertainty, and diabetes-specific criteria. Results: A total of 19 models were included. Twelve models investigated type 2 diabetes, two developed type 1 models, two created separate models for type 1 and type 2, and three developed joint type 1 and type 2 models. Most models were developed in the United States, United Kingdom, Europe or Canada. Later models use data or methods from earlier models for development or validation. There are four main types of models: Markov-based cohort models, Markov-based microsimulations, discrete-time microsimulations, and continuous-time differential equation models. All models were long-term diabetes models incorporating a wide range of complications across various organ systems. In early diabetes modelling, before the ADA guidelines were published, most models did not include descriptions of all the diabetes-specific components of the ADA guidelines, but this improved significantly by 2004. Conclusion: A clear, descriptive short summary of the model was often lacking. Descriptions of model validation and uncertainty were the most poorly reported of the four main areas, although there are conferences focussing specifically on the issue of validation. Interdependence between complications was the least well incorporated or reported of the diabetes-specific criteria
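Of the four model types the review distinguishes, the Markov-based cohort model is the simplest to sketch: a cohort is divided among health states and redistributed each cycle by fixed transition probabilities. The states and annual probabilities below are invented for illustration and are not taken from any of the reviewed diabetes models:

```python
# Minimal sketch of a Markov-based cohort model (one of the four model types
# named in the review). States and transition probabilities are hypothetical.

STATES = ["no_complication", "complication", "dead"]

# Annual transition probabilities; each row sums to 1. Invented values.
P = {
    "no_complication": {"no_complication": 0.90, "complication": 0.08, "dead": 0.02},
    "complication":    {"no_complication": 0.00, "complication": 0.90, "dead": 0.10},
    "dead":            {"no_complication": 0.00, "complication": 0.00, "dead": 1.00},
}

def run_cohort(start, cycles):
    """Propagate cohort proportions through `cycles` annual transitions."""
    dist = dict(start)
    for _ in range(cycles):
        nxt = {s: 0.0 for s in STATES}
        for state, frac in dist.items():
            for target, prob in P[state].items():
                nxt[target] += frac * prob
        dist = nxt
    return dist

cohort = {"no_complication": 1.0, "complication": 0.0, "dead": 0.0}
after_10 = run_cohort(cohort, 10)  # state distribution after 10 annual cycles
```

A full diabetes model of this kind would attach costs and utilities to each state and, as the review notes for the diabetes-specific criteria, would need interdependent complication states rather than a single aggregate one.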

    Developing and validating a predictive model for stroke progression

    Background: Progression is believed to be a common and important complication in acute stroke, and has been associated with increased mortality and morbidity. Reliable identification of predictors of early neurological deterioration could potentially benefit routine clinical care. The aim of this study was to identify predictors of early stroke progression using two independent patient cohorts. Methods: Two patient cohorts were used for this study – the first cohort formed the training data set, which included consecutive patients admitted to an urban teaching hospital between 2000 and 2002, and the second cohort formed the test data set, which included patients admitted to the same hospital between 2003 and 2004. A standard definition of stroke progression was used. The first cohort (n = 863) was used to develop the model. Variables that were statistically significant (p < 0.1) on univariate analysis were included in the multivariate model. Logistic regression was employed, using backward stepwise regression to drop the least significant variables (p > 0.1) in turn. The second cohort (n = 216) was used to test the performance of the model. The performance of the predictive model was assessed in terms of both calibration and discrimination. Multiple imputation methods were used for dealing with missing values. Results: Variables shown to be significant predictors of stroke progression were conscious level, history of coronary heart disease, presence of hyperosmolarity, CT lesion, living alone on admission, Oxfordshire Community Stroke Project classification, presence of pyrexia and smoking status. The model appears to have reasonable discriminative properties [the median receiver-operating characteristic curve value was 0.72 (range 0.72–0.73)] and to fit the observed data well, as indicated by the high goodness-of-fit p value [the median p value from the Hosmer-Lemeshow test was 0.90 (range 0.50–0.92)]. Conclusion: The predictive model developed in this study contains variables that can be easily collected in practice, thereby increasing its usability in clinical practice. Using this analysis approach, the discrimination and calibration of the predictive model appear sufficiently high to provide accurate predictions. This study also offers some discussion around the validation of predictive models for wider use in clinical practice
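The discrimination statistic quoted above (an area under the receiver-operating characteristic curve of 0.72) can be computed directly from predicted risks and observed outcomes. A minimal sketch, using invented prediction pairs rather than the study's cohorts:

```python
# Sketch of the discrimination check described in the abstract: AUC is the
# probability that a randomly chosen progressor receives a higher predicted
# risk than a randomly chosen non-progressor (ties count one half).
# The (risk, outcome) pairs below are invented illustrations.

def auc(pairs):
    """Compute AUC from (predicted_risk, observed_outcome) pairs."""
    pos = [p for p, y in pairs if y == 1]   # patients who progressed
    neg = [p for p, y in pairs if y == 0]   # patients who did not
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Hypothetical predicted risks with observed progression (1) or not (0)
preds = [(0.9, 1), (0.7, 1), (0.6, 0), (0.4, 1), (0.3, 0), (0.2, 0)]
score = auc(preds)
```

Calibration, the other property the study assessed, compares predicted and observed event rates within risk groups (as the Hosmer-Lemeshow test does) and is a separate check from the ranking-based AUC.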

    Association between patient outcomes and key performance indicators of stroke care quality: A systematic review and meta-analysis

    Purpose: Translating research evidence into clinical practice often uses key performance indicators to monitor quality of care. We conducted a systematic review to identify the stroke key performance indicators used in large registries, and to estimate their association with patient outcomes. Method: We sought publications on recent (January 2000–May 2017) national or regional stroke registers reporting the association of key performance indicators with patient outcome (adjusting for age and stroke severity). We searched Ovid Medline, EMBASE and PubMed and screened references from bibliographies. We used an inverse-variance random-effects meta-analysis to estimate associations (odds ratio; 95% confidence interval) with death or poor outcome (death or disability) at the end of follow-up. Findings: We identified 30 eligible studies (324,409 patients). The commonest key performance indicators were swallowing/nutritional assessment, stroke unit admission, antiplatelet use for ischaemic stroke, brain imaging, anticoagulant use for ischaemic stroke with atrial fibrillation, lipid management, deep vein thrombosis prophylaxis and early physiotherapy/mobilisation. Lower case fatality was associated with stroke unit admission (odds ratio 0.79; 0.72–0.87), swallowing/nutritional assessment (odds ratio 0.78; 0.66–0.92), antiplatelet use for ischaemic stroke (odds ratio 0.61; 0.50–0.74), anticoagulant use for ischaemic stroke with atrial fibrillation (odds ratio 0.51; 0.43–0.64), lipid management (odds ratio 0.52; 0.38–0.71) and early physiotherapy or mobilisation (odds ratio 0.78; 0.67–0.91). A reduced risk of poor outcome was associated with adherence to swallowing/nutritional assessment (odds ratio 0.58; 0.43–0.78) and stroke unit admission (odds ratio 0.83; 0.77–0.89). Adherence to several key performance indicators appeared to have an additive benefit. Discussion: Adherence to common key performance indicators was consistently associated with a lower risk of death or disability after stroke. Conclusion: Policy makers and health care professionals should implement and monitor those key performance indicators supported by good evidence
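The pooling step described above, an inverse-variance random-effects meta-analysis of odds ratios, can be sketched as follows. The DerSimonian-Laird estimator is assumed here as the classic choice for that description (the review does not name its estimator), and the input odds ratios and standard errors are invented, not the study-level data behind the figures quoted:

```python
import math

# Sketch of inverse-variance random-effects pooling of odds ratios,
# using the DerSimonian-Laird between-study variance estimator (assumed).
# The input odds ratios and standard errors (on the log scale) are invented.

def pool_random_effects(odds_ratios, log_or_ses):
    """Pool log odds ratios; return (pooled OR, 95% CI lower, upper)."""
    y = [math.log(o) for o in odds_ratios]
    w = [1.0 / se ** 2 for se in log_or_ses]        # fixed-effect weights
    sw = sum(w)
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    # Cochran's Q and the DerSimonian-Laird between-study variance tau^2
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (len(y) - 1)) / c)
    # Random-effects weights incorporate tau^2
    wr = [1.0 / (se ** 2 + tau2) for se in log_or_ses]
    pooled = sum(wi * yi for wi, yi in zip(wr, y)) / sum(wr)
    se_pooled = math.sqrt(1.0 / sum(wr))
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se_pooled),
            math.exp(pooled + 1.96 * se_pooled))

or_pooled, ci_lo, ci_hi = pool_random_effects([0.75, 0.82, 0.70],
                                              [0.10, 0.15, 0.12])
```

When the between-study heterogeneity estimate is zero, the random-effects result collapses to the fixed-effect (inverse-variance) pooled estimate, which is why the two approaches often agree for homogeneous study sets.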

    How methodological frameworks are being developed: evidence from a scoping review

    Background: Although the benefits of using methodological frameworks are increasingly recognised, to date, there is no formal definition of what constitutes a ‘methodological framework’, nor is there any published guidance on how to develop one. For the purposes of this study we have defined a methodological framework as a structured guide to completing a process or procedure. This study’s aims are to: (a) map the existing landscape on the use of methodological frameworks; (b) identify approaches used for the development of methodological frameworks and terminology used; and (c) provide suggestions for developing future methodological frameworks. We took a broad view and did not limit our study to methodological frameworks in research and academia. Methods: A scoping review was conducted, drawing on Arksey and O’Malley’s methods and more recent guidance. We systematically searched two major electronic databases (MEDLINE and Web of Science), as well as grey literature sources and the reference lists and citations of all relevant papers. Study characteristics and approaches used for development of methodological frameworks were extracted from included studies. Descriptive analysis was conducted. Results: We included a total of 30 studies, representing a wide range of subject areas. The most commonly reported approach for developing a methodological framework was ‘Based on existing methods and guidelines’ (66.7%), followed by ‘Refined and validated’ (33.3%), ‘Experience and expertise’ (30.0%), ‘Literature review’ (26.7%), ‘Data synthesis and amalgamation’ (23.3%), ‘Data extraction’ (10.0%), ‘Iteratively developed’ (6.7%) and ‘Lab work results’ (3.3%). There was no consistent use of terminology; diverse terms for methodological framework were used across and, interchangeably, within studies. 
Conclusions: Although no formal guidance exists on how to develop a methodological framework, this scoping review found an overall consensus in approaches used, which can be broadly divided into three phases: (a) identifying data to inform the methodological framework; (b) developing the methodological framework; and (c) validating, testing and refining the methodological framework. Based on these phases, we provide suggestions to facilitate the development of future methodological frameworks

    Economic Evaluation Plan (EEP) for A Very Early Rehabilitation Trial (AVERT): An international trial to compare the costs and cost-effectiveness of commencing out of bed standing and walking training (very early mobilization) within 24 h of stroke onset with usual stroke unit care

    Rationale: A key objective of A Very Early Rehabilitation Trial (AVERT) is to determine whether the intervention, very early mobilisation following stroke, is cost-effective. Resource use data were collected to enable an economic evaluation to be undertaken, and a plan for the main economic analyses was written prior to the completion of follow-up data collection. Aim and hypothesis: To report the methods used to collect resource use data, pre-specify the main economic evaluation analyses, and report other intended exploratory analyses of resource use data. Sample size estimates: Recruitment to the trial has been completed. A total of 2,104 participants from 56 stroke units across three geographic regions participated in the trial. Methods and design: Resource use data were collected prospectively alongside the trial using standardised tools. The primary economic evaluation method is a cost-effectiveness analysis comparing resource use over 12 months, together with the health outcomes of the intervention, against a usual care comparator. A cost-utility analysis is also intended. Study outcome: The primary outcome in the cost-effectiveness analysis will be favourable outcome (modified Rankin Scale score 0–2) at 12 months. The cost-utility analysis will use health-related quality of life, reported as quality-adjusted life years gained over a 12-month period, as measured by the modified Rankin Scale and the Assessment of Quality of Life. Discussion: The outcomes of the economic evaluation will inform the cost-effectiveness of very early mobilisation following stroke when compared to usual care. The exploratory analysis will report patterns of resource use in the first year following stroke
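The planned cost-effectiveness comparison typically reduces to an incremental cost-effectiveness ratio (ICER): the extra cost per extra unit of effect of the intervention over the comparator. A minimal sketch, with entirely hypothetical per-arm costs and outcome proportions that are not AVERT results:

```python
# Illustrative ICER sketch only. The per-arm mean costs and the proportions
# achieving a favourable outcome (modified Rankin Scale 0-2) are invented
# placeholders, not results from the AVERT trial.

def icer(cost_intervention, effect_intervention, cost_control, effect_control):
    """Incremental cost per unit of incremental effect."""
    d_cost = cost_intervention - cost_control
    d_effect = effect_intervention - effect_control
    if d_effect == 0:
        raise ValueError("no incremental effect; ICER is undefined")
    return d_cost / d_effect

# Hypothetical: mean 12-month cost and proportion with favourable outcome
value = icer(cost_intervention=18500.0, effect_intervention=0.48,
             cost_control=17800.0, effect_control=0.45)
# value = incremental cost per additional patient with a favourable outcome
```

In the planned cost-utility analysis the effect denominator would instead be quality-adjusted life years gained, giving a cost-per-QALY figure comparable against willingness-to-pay thresholds.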

    Comparative safety and effectiveness of direct oral anticoagulants in patients with atrial fibrillation in clinical practice in Scotland

    To compare the clinical effectiveness and safety of direct oral anticoagulants (DOACs) in patients with atrial fibrillation (AF) in routine clinical practice. Retrospective cohort study using linked administrative data. The study population (n = 14,577) included patients with a diagnosis of AF (confirmed in hospital) who initiated DOAC treatment in Scotland between August 2011 and December 2015. Multivariate Cox proportional hazards models were used to estimate hazard ratios for thromboembolic events, mortality, and bleeding events. No differences between the DOACs were observed in the risks of stroke, systemic embolism, or cardiovascular death. In contrast, the risk of myocardial infarction was higher among apixaban patients in comparison to rivaroxaban (1.67 [1.02–2.71]), and all-cause mortality was higher among rivaroxaban patients in comparison to both apixaban (1.22 [1.01–1.47]) and dabigatran (1.55 [1.16–2.05]); rivaroxaban patients also had a higher risk of pulmonary embolism than apixaban patients (5.27 [1.79–15.53]). The risk of other major bleeds was higher among rivaroxaban patients compared to apixaban (1.50 [1.10–2.03]) and dabigatran (1.58 [1.01–2.48]); the risks of gastro-intestinal bleeds and overall bleeding were higher among rivaroxaban patients than among apixaban patients (1.48 [1.01–2.16] and 1.52 [1.21–1.92], respectively). All DOACs were similarly effective in preventing strokes and systemic embolisms, while patients treated with rivaroxaban exhibited the highest bleeding risks. The observed differences in the risks of all-cause mortality, myocardial infarction, and pulmonary embolism warrant further research
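The bracketed figures above are hazard ratios with 95% confidence intervals from the fitted Cox models. How such a figure relates to a model coefficient can be sketched briefly; the coefficient and standard error below are invented, not values from this study:

```python
import math

# Sketch of how a reported hazard ratio and 95% CI arise from a Cox model:
# HR = exp(beta) for a covariate coefficient, with the CI from its standard
# error on the log-hazard scale. beta and se here are hypothetical.

def hazard_ratio_ci(beta, se, z=1.96):
    """Return (hazard ratio, 95% CI lower bound, upper bound)."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

hr, hr_lo, hr_hi = hazard_ratio_ci(beta=0.405, se=0.12)
```

Because the interval is symmetric on the log scale, it is asymmetric around the hazard ratio itself, which is why reported intervals such as 5.27 [1.79–15.53] look lopsided.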

    Stroke care in Africa: a systematic review of the literature

    Background: Appropriate systems of stroke care are important to manage the increasing death and disability associated with stroke in Africa. Information on existing stroke services in African countries is limited. Aim: To describe the status of stroke care in Africa. Summary of review: We undertook a systematic search of the published literature to identify recent (1 January 2006–20 June 2017) publications that described stroke care in any African country. Our initial search yielded 838 potential papers, of which 38 publications were eligible, representing 14 of 54 African countries. Across the publications included in our review, the proportion of stroke patients reported to arrive at hospital within 3 h of stroke onset varied between 10% and 43%. The median time interval between stroke onset and hospital admission was 31 h. Poor awareness of stroke signs and symptoms; shortages of medical transportation, health care personnel, and stroke units; and the high cost of brain imaging, thrombolysis, and outpatient physiotherapy rehabilitation services were reported as major barriers to providing best-practice stroke care in Africa. Conclusions: This review provides an overview of stroke care in Africa and highlights the paucity of available data. Stroke care in Africa usually fell below recommended standards, with variations across countries and settings. Combined efforts from policy makers and health care professionals in Africa are needed to improve, and to ensure access to, organized stroke care in as many settings as possible. Mechanisms to routinely monitor usual care (i.e., registries or audits) are also needed to inform policy and practice